Error Analysis of Various Forms of Floating Point Dot Products
Abstract
This thesis discusses both the theoretical and statistical errors obtained by various dot product algorithms. A host of linear algebra methods derive their error behavior directly from the dot product. In particular, many high performance dense systems derive their performance and error behavior overwhelmingly from matrix multiply, and matrix multiply's error behavior is almost wholly attributable to the underlying dot product it uses. As the standard commercial workstation expands to 64-bit memories and multicore processors, problems are increasingly parallelized, and problem sizes grow to meet the increasing capacity of the machines. The canonical worst-case analysis makes assumptions about limited problem size that may not hold in the near future. This thesis discusses several implementations of dot product, their theoretical and achieved error bounds, and their suitability for use as a performance-critical building block of linear algebra kernels. It also statistically analyzes the source and magnitude of various types of error in dot products, and introduces a measure of error, the Error Bracketing Bit, that is more intuitively useful to programmers and engineers than the standard relative error.
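To make the error behavior concrete, the following sketch contrasts the canonical left-to-right dot product, which rounds once per multiply-add, with an exact rational reference value. The input vector and the relative-error helper are illustrative choices, not taken from the thesis, and the thesis's Error Bracketing Bit measure is not reproduced here.

```python
from fractions import Fraction

def dot_canonical(x, y):
    """Canonical left-to-right dot product: one rounding per multiply-add."""
    s = 0.0
    for xi, yi in zip(x, y):
        s += xi * yi
    return s

def dot_exact(x, y):
    """Exact rational reference: every finite binary float is a rational."""
    return sum(Fraction(xi) * Fraction(yi) for xi, yi in zip(x, y))

# Illustrative input chosen to trigger catastrophic cancellation:
# 1e16 + 1.0 rounds back to 1e16 in double precision (ulp is 2 there),
# so both additions of 1.0 are partially or wholly lost.
x = [1e16, 1.0, -1e16, 1.0]
y = [1.0, 1.0, 1.0, 1.0]

approx = dot_canonical(x, y)   # 1.0: one of the 1.0 terms vanished
exact = dot_exact(x, y)        # exactly 2
rel_err = abs((Fraction(approx) - exact) / exact)  # 1/2
```

A relative error of 1/2 on a four-element vector shows how quickly the canonical ordering can shed low-order bits when intermediate sums have much larger magnitude than the result.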
Similar Resources
CS-TR-2007-002: Error Analysis of Various Forms of Floating Point Dot Products
This paper discusses both the theoretical and statistical errors obtained by various dot product algorithms. A host of linear algebra methods derive their error behavior directly from dot product. In particular, most high performance dense systems derive their performance and error behavior overwhelmingly from matrix multiply, and matrix multiply’s error behavior is almost wholly attributable t...
Reducing Floating Point Error in Dot Product Using the Superblock Family of Algorithms
This paper discusses both the theoretical and statistical errors obtained by various well-known dot products, from the canonical to pairwise algorithms, and introduces a new and more general framework that we have named superblock which subsumes them and permits a practitioner to make trade-offs between computational performance, memory usage, and error behavior. We show that algorithms with lo...
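The pairwise algorithm mentioned in this excerpt halves the vector recursively, so rounding error grows with the depth of the recursion, O(log n), rather than with the vector length, O(n). A minimal sketch (the superblock family generalizes this by tuning the base-case block size; the `block` parameter here is an illustrative stand-in, not the paper's interface):

```python
def dot_pairwise(x, y, block=2):
    """Pairwise (divide-and-conquer) dot product.

    Below the block-size cutoff, fall back to the canonical loop;
    above it, split in half and combine, so error accumulates over
    O(log n) additions on the critical path instead of O(n).
    """
    n = len(x)
    if n <= block:
        s = 0.0
        for i in range(n):
            s += x[i] * y[i]
        return s
    m = n // 2
    return (dot_pairwise(x[:m], y[:m], block)
            + dot_pairwise(x[m:], y[m:], block))
```

Larger block sizes trade a worse error constant for better performance (contiguous inner loops, less call overhead), which is precisely the trade-off space the superblock framework exposes.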
Accurate Floating Point Arithmetic through Hardware Error-Free Transformations
This paper presents a hardware approach to performing accurate floating point addition and multiplication using the idea of error-free transformations. Specialized iterative algorithms are implemented for computing arbitrarily accurate sums and dot products. The results of a Xilinx Virtex 6 implementation are given, area and performance are compared against standard floating point units and it i...
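Error-free transformations are the standard software building blocks behind approaches like this one: Knuth's TwoSum and Dekker's TwoProduct each return a rounded result plus its exact rounding error. The compensated dot product below is a software sketch in the style of the well-known Dot2 algorithm of Ogita, Rump, and Oishi, not the hardware design of the paper; it achieves roughly twice the working precision.

```python
def two_sum(a, b):
    """Knuth's error-free transformation: a + b == s + e exactly."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def split(a):
    """Dekker's splitting of a 53-bit double into two 26-bit halves."""
    c = 134217729.0 * a          # factor 2**27 + 1
    hi = c - (c - a)
    return hi, a - hi

def two_prod(a, b):
    """Error-free transformation: a * b == p + e exactly (no FMA needed)."""
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    e = al * bl + (al * bh + (ah * bl + (ah * bh - p)))
    return p, e

def dot2(x, y):
    """Compensated dot product: accumulate the error terms separately,
    giving a result nearly as accurate as doubled working precision."""
    p, s = two_prod(x[0], y[0])
    for xi, yi in zip(x[1:], y[1:]):
        h, r = two_prod(xi, yi)
        p, q = two_sum(p, h)
        s += q + r
    return p + s
```

On the cancellation-prone input [1e16, 1.0, -1e16, 1.0] dotted with ones, where the canonical loop returns 1.0, this compensated version recovers the exact answer 2.0.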
Resumos MS 3: IntMath-TSD: Interval Mathematics and Connections in Teaching and Scientific Development
Dot products are one of the most frequent operations in numerical computations. A special feature of the C++-class library C-XSC for scientific computing is the computation of such dot products with high accuracy. A long accumulator is provided in the form of special dot precision variables which may store the intermediate result of a dot product without any rounding error. Only the final resul...
Design-space exploration for the Kulisch accumulator
Floating-point sums and dot products accumulate rounding errors that may render the result very inaccurate. To address this, Kulisch proposed to use an internal accumulator large enough to cover the full exponent range of floating-point. With it, sums and dot products become exact operations. This idea failed to materialize in general purpose processors, as it was considered too slow and/or too ...
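The key property of the Kulisch accumulator is that every partial product is added without rounding, and the single rounding happens only when the wide register is converted back to a float. The sketch below emulates that behavior with Python's exact rationals; it shows the semantics only, not the fixed-point hardware register Kulisch actually proposed.

```python
from fractions import Fraction

def dot_kulisch_sketch(x, y):
    """Semantic sketch of a Kulisch-style exact accumulator.

    Each product of two doubles is an exact rational, and the
    accumulation is performed without any intermediate rounding.
    The only rounding occurs in the final conversion to float.
    """
    acc = Fraction(0)
    for xi, yi in zip(x, y):
        acc += Fraction(xi) * Fraction(yi)   # exact multiply-accumulate
    return float(acc)                        # one rounding, at the end
```

The result is therefore correctly rounded regardless of the order of the terms or the amount of cancellation, which is exactly the guarantee that makes sums and dot products exact operations under this scheme.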
Publication date: 2007